
Bringing Deep Learning to the "Internet of Things"

#artificialintelligence

This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. MIT researchers have developed a system that could bring deep learning neural networks to new – and much smaller – places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the "internet of things" (IoT). The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.


Bringing deep learning to life

#artificialintelligence

Gaby Ecanow loves listening to music, but never considered writing her own until taking 6.S191 (Introduction to Deep Learning). By her second class, the second-year MIT student had composed an original Irish folk song with the help of a recurrent neural network, and was considering how to adapt the model to create her own Louis the Child-inspired dance beats. "It was cool," she says. "It didn't sound at all like a machine had made it." This year, 6.S191 kicked off as usual, with students spilling into the aisles of Stata Center's Kirsch Auditorium during Independent Activities Period (IAP).


Bringing Deep Learning for Geospatial Applications to Life

#artificialintelligence

Whenever we start to talk about artificial intelligence, machine learning, or deep learning, the cautionary tales of science fiction cinema arise: HAL 9000 from 2001: A Space Odyssey, the T-series robots from Terminator, the replicants from Blade Runner. There are hundreds of stories about computers learning too much and becoming a threat. The crux of these movies always has one thing in common: there are things computers do well and things humans do well, and the two don't necessarily intersect. Computers excel at crunching numbers and statistical analysis (deductive reasoning), while humans excel at recognizing patterns and making inductive decisions from deductive data. Each has its strengths and its role. With the massive proliferation of data across platforms, types, and collection schedules, how are geospatial specialists supposed to address this apparently insurmountable task?


H2O-3 on FfDL: Bringing deep learning and machine learning closer together

#artificialintelligence

This post is co-authored by Animesh Singh, Nicholas Png, Tommy Li, and Vinod Iyengar. Deep learning frameworks like TensorFlow, PyTorch, Caffe, MXNet, and Chainer have reduced the effort and skills needed to train and use deep learning models. But for AI developers and data scientists, it's still a challenge to set up and use these frameworks in a consistent manner for distributed model training and serving. The open source Fabric for Deep Learning (FfDL) project provides a consistent way for AI developers and data scientists to use deep learning as a service on Kubernetes and to use Jupyter notebooks to execute distributed deep learning training for models written with these multiple frameworks. Now, FfDL is announcing a new addition that brings together that deep learning training capability with state-of-the-art machine learning methods.


Bringing deep learning to IoT devices

#artificialintelligence

Deep learning is well known for solving seemingly intractable problems in computer vision and natural language processing, but it typically does so using massive CPU and GPU resources. Traditional deep learning techniques aren't well suited to Internet of Things (IoT) applications, however, because IoT devices can't supply the same level of computational resources. When running deep learning analysis on mobile devices, developers must adapt to a more resource-constrained platform. Image analysis on such platforms can consume significant compute and memory: the SpotGarbage app, for example, uses convolutional neural networks to detect garbage in images but consumes 83 percent of the CPU and takes more than five seconds to respond. Fortunately, recent advances in network compression, approximate computing, and accelerators are enabling deep learning on resource-constrained IoT devices.
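To make the compression idea concrete, here is a minimal sketch of one such technique, symmetric post-training int8 quantization, written with NumPy. The function names and the toy weight tensor are illustrative only; this is not the method used by SpotGarbage or any particular library, just the general trick of storing weights in 8 bits instead of 32 to cut memory four-fold at a small accuracy cost.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float32 weight tensor to int8."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

# A toy "layer" of float32 weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32...
print(q.nbytes, w.nbytes)  # 4096 16384
# ...and the per-weight error is bounded by half a quantization step.
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)  # True
```

The same idea, combined with pruning and operator fusion, is what lets frameworks squeeze convolutional networks into the kilobytes of RAM typical of microcontroller-class IoT hardware.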


Bringing deep learning to big screen animation

#artificialintelligence

Modern films and TV shows are filled with spectacular computer-generated sequences produced by rendering systems that simulate the flow of light in a three-dimensional scene and convert the information into a two-dimensional image. But computing the thousands of light rays per frame needed for accurate colour, shadows, reflectivity and other light-based characteristics is a labour-intensive, time-consuming and expensive undertaking. An alternative is to render the images using only a few light rays. That saves time and labour but results in inaccuracies that show up as objectionable "noise" in the final image. UC Santa Barbara electrical and computer engineering Ph.D. student Steve Bako and his advisor, Pradeep Sen, are working toward a solution.
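The link between "few light rays" and "noise" is statistical: a Monte Carlo renderer estimates each pixel's brightness as the average of random ray samples, and the variance of that average shrinks as the ray count grows. The toy model below (illustrative only, not an actual renderer) shows the effect, assuming for simplicity that each ray's contribution is a uniform random value.

```python
import numpy as np

# Toy Monte Carlo rendering model: a pixel's true brightness is the mean
# of the light arriving over all possible rays; the renderer estimates it
# by averaging a few randomly sampled rays.
rng = np.random.default_rng(42)

def render_pixel(n_rays):
    """Estimate one pixel's brightness from n_rays random ray samples."""
    samples = rng.uniform(0.0, 1.0, size=n_rays)  # each ray's contribution
    return samples.mean()

# Spread of the estimate across many repeated renders of the same pixel:
noisy = np.std([render_pixel(4) for _ in range(10_000)])     # cheap, few rays
clean = np.std([render_pixel(1024) for _ in range(10_000)])  # expensive, many rays

# The estimator's standard deviation falls like 1/sqrt(n_rays),
# so the cheap render is far noisier.
print(noisy > clean)  # True
```

A learned denoiser of the kind Bako and Sen work on attacks the problem from the other side: keep the cheap few-ray render, then use a neural network to remove the residual noise rather than paying for more rays.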